Why isn't there more training on the edge? « Pete Warden's blog
One of the most frequent questions I get asked by people exploring machine learning beyond cloud and desktop machines is "What about training?". If you look around at the popular frameworks and use cases of edge ML, most of them seem focused on inference. It isn't obvious why this is the case, so I decided to collect my notes in a post here, both so I have something to refer to when the question comes up and to organize my own thoughts. I think the biggest reason there's not more training on the edge is that most models need to be trained through supervised learning, that is, each sample used for training needs a ground truth label. If you're running on a phone or embedded system, there's not likely to be an easy way to attach a label to incoming data, other than running an existing model and guessing.
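That last workaround, treating an existing model's confident predictions as stand-in labels, is often called pseudo-labeling or self-training. A minimal sketch of the idea, where the predictor, threshold, and data are all illustrative assumptions rather than anything from the post:

```python
# Sketch of pseudo-labeling: keep only the samples where a deployed
# model's prediction is confident enough to treat as "ground truth".
# The threshold value and the toy predictor below are illustrative.

def pseudo_label(samples, predict, confidence_threshold=0.9):
    """Return (sample, label) pairs for predictions above the threshold."""
    labeled = []
    for sample in samples:
        label, confidence = predict(sample)
        if confidence >= confidence_threshold:
            labeled.append((sample, label))
    return labeled

# Toy predictor standing in for an existing on-device inference model.
def toy_predict(x):
    return ("positive" if x > 0 else "negative", abs(x))

data = [0.95, -0.99, 0.2]
print(pseudo_label(data, toy_predict))  # the low-confidence 0.2 sample is dropped
```

The obvious catch, and why this is only a partial answer, is that the "labels" are only as good as the existing model's guesses.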
Machines of Loving Understanding « Pete Warden's blog
It's a reminder of how creepy a world full of devices that blur the line between life and objects could be, but there's also something appealing about connecting more closely to the things we build. Far more insightful people than me have explored these issues, from Mary Shelley to Philip K. Dick, but the aspect that has fascinated me most is how computers understand us. We live in a world where our machines are wonderful at showing us near-photorealistic scenes in real time, and can even talk to us in convincing voices. Up until recently though, they've not been able to make sense of images or audio that are given to them as inputs. We've been able to synthesize voices for decades, but speech recognition has only really started working well in the last few years.
Why is it so difficult to retrain neural networks and get the same results? « Pete Warden's blog
Last week I had a question from a colleague about reproducibility in TensorFlow, specifically in the 1.14 era. He wanted to be able to run the same training code multiple times and get exactly the same results, which on the surface doesn't seem like an unreasonable expectation. Machine learning training is fundamentally a series of arithmetic operations applied repeatedly, so what makes getting the same results every time so hard? I had the same question when we first started TensorFlow, and I was lucky enough to learn some of the answers from the numerical programming experts on the team, so I want to share a bit of what I discovered. There are good guides to achieving reproducibility out there, but they don't usually include explanations for why all the steps involved are necessary, or why training becomes so slow when you do apply them.
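A core part of the answer is that floating-point addition is not associative: change the order in which the same numbers are summed (as non-deterministic GPU thread scheduling routinely does) and the rounded result can differ. A self-contained illustration:

```python
# Floating-point addition is not associative: grouping the same three
# numbers differently produces different rounded results in float64.
a, b, c = 1e16, -1e16, 1.0

left = (a + b) + c   # the large values cancel first, then 1.0 is added
right = a + (b + c)  # 1.0 is absorbed into -1e16 before the cancellation

print(left)   # 1.0
print(right)  # 0.0
```

In a training run summing millions of gradients across parallel workers, tiny discrepancies like this compound step after step, which is why bit-exact reproducibility requires forcing deterministic (and often slower) operation orderings, not just fixing random seeds.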
Machine learning at the edge: TinyML is getting big
Is it $61 billion and a 38.4% compound annual growth rate (CAGR) by 2028, or $43 billion and a 37.4% CAGR by 2027? It depends on which report outlining the growth of edge computing you choose to go by, but in the end it's not that different. What matters is that edge computing is booming. There is growing interest from vendors, and ample coverage, for good reason. Although the definition of what constitutes edge computing is a bit fuzzy, the idea is simple.
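To see why growth rates near 38% dwarf the gap between the two forecasts, it helps to unwind the compounding. CAGR just means the value is multiplied by a fixed factor each year; a quick sketch of the arithmetic (the base values and horizons here are illustrative, since the reports' starting years aren't given):

```python
def project(value_0, cagr, years):
    """Project a value forward at a compound annual growth rate."""
    return value_0 * (1.0 + cagr) ** years

# At 38.4% per year, a market grows roughly 3.7x over four years,
# so small differences in the quoted rate barely matter.
print(round(project(1.0, 0.384, 4), 2))  # ~3.67
print(round(project(1.0, 0.374, 4), 2))
```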
TinyML: Machine Learning with TensorFlow Lite on Arduino and Ultra-Low-Power Microcontrollers: Pete Warden, Daniel Situnayake: 9781492052043: Amazon.com: Books
The goal of this book is to show how any developer with basic experience using a command-line terminal and code editor can get started building their own projects running machine learning (ML) on embedded devices. Who Is This Book Aimed At? To build a TinyML project, you will need to know a bit about both machine learning and embedded software development. Neither of these is a common skill, and very few people are experts in both, so this book starts with the assumption that you have no background in either. The only requirements are that you have some familiarity with running commands in the terminal (or Command Prompt on Windows), and are able to load a program source file into an editor, make alterations, and save it.
Data Science vs Engineering: Tension Points
This blog post provides highlights and a full written transcript from the panel, "Data Science Versus Engineering: Does It Really Have To Be This Way?" with Amy Heineike, Paco Nathan, and Pete Warden at Domino HQ. Topics discussed include the current state of collaboration around building and deploying models, tension points that potentially arise, and practical advice on how to address those tension points. Recently, I had the opportunity to moderate the panel. As Domino's Head of Content, it is my responsibility to ensure that our content provides a high degree of value, density, and analytical rigor that sparks respectful, candid public discourse from multiple perspectives. Discourse that directly addresses challenges, including unsolved problems with high stakes. Discourse that is also anchored in the intention of helping accelerate data ...
TensorFlow Image Recognition on a Raspberry Pi - Silicon Valley Data Science
Editor's note: This post is part of our Trainspotting series, a deep dive into the visual and audio detection components of our Caltrain project. You can find the introduction to the series here. SVDS has previously used real-time, publicly available data to improve Caltrain arrival predictions. However, the station-arrival time data from Caltrain was not reliable enough to make accurate predictions. Using a Raspberry Pi Camera and USB microphone, we were able to detect trains, their speed, and their direction. When we set up a new Raspberry Pi in our Mountain View office, we ran into a big problem: the Pi was not only detecting Caltrains (true positives), but also Union Pacific freight trains and the VTA light rail (false positives).
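Telling Caltrains apart from freight and light-rail traffic is ultimately a precision problem: of everything the detector fires on, how much is actually a Caltrain? A minimal sketch of the bookkeeping, using made-up example labels rather than data from the Trainspotting project:

```python
# Count true/false positives for a binary "Caltrain detected" signal.
# The example detections and labels below are illustrative only.

def precision(detections, ground_truth):
    """detections / ground_truth: parallel lists of booleans per time window."""
    tp = sum(1 for d, g in zip(detections, ground_truth) if d and g)
    fp = sum(1 for d, g in zip(detections, ground_truth) if d and not g)
    return tp / (tp + fp) if (tp + fp) else 0.0

# Detector fires on a Caltrain, a freight train, and a light-rail car:
detected =     [True, True, True, False]
was_caltrain = [True, False, False, False]
print(precision(detected, was_caltrain))  # 1 TP, 2 FP
```

Two false positives out of three detections gives a precision of one third, which is exactly the kind of number that motivates retraining the classifier to distinguish train types rather than just "train vs. no train".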